Webinar

Elections in the AI Era

Thursday, October 26, 2023
Speakers

Jocelyn Benson
Secretary of State, State of Michigan

Marc Rotenberg
Executive Director and Founder, Center for AI and Digital Policy

Presider

Irina Faskianos
Vice President for National Program and Outreach, Council on Foreign Relations

Jocelyn Benson, Michigan secretary of state, and Marc Rotenberg, executive director and founder of the Center for AI and Digital Policy, discuss how officials can prepare for challenges posed by AI in U.S. elections. A question-and-answer session follows their opening remarks.

TRANSCRIPT

FASKIANOS: Thank you. Welcome to the Council on Foreign Relations State and Local Officials Webinar. I’m Irina Faskianos, vice president of the National Program and Outreach at CFR. 

CFR is an independent and nonpartisan membership organization, think tank, and publisher focused on U.S. foreign policy. We’re also the publisher of Foreign Affairs magazine. And as always, CFR takes no institutional positions on matters of policy. Through our State and Local Officials Initiative, CFR serves as a resource on international issues affecting the priorities and agendas of state and local governments by providing analysis on a wide range of policy topics.

We’re delighted to have close to five hundred participants from fifty-one states and U.S. territories for today’s conversation, which is on the record. And we will share the video and transcript after the fact at CFR.org. We are pleased to have Secretary Jocelyn Benson and Professor Marc Rotenberg with us today to talk about “Elections in the AI Era.”

And Secretary Benson is joining us from a car. She’s very busy, so we are happy—thank you very much for doing this. She is currently serving as Michigan’s forty-third secretary of state. She has received multiple national awards for her work to ensure the security and fairness of Michigan’s 2020 and 2022 general elections. Secretary Benson recently launched Michigan’s Truth Tellers Task Force, composed of community leaders who speak with voters about their election and misinformation concerns, build trust before and after elections, and provide transparency in the electoral process.

Professor Marc Rotenberg is an adjunct professor of privacy law at Georgetown University, and the founder and executive director of the Center for AI and Digital Policy. He has served as an expert advisor on artificial intelligence to many international panels, organizations, and Congress. And in 2018, he helped draft the Universal Guidelines for Artificial Intelligence, a widely endorsed human-rights framework for the regulation of artificial intelligence. So thank you both for being with us.

Secretary Benson, I will begin with you, if you could tell us about the threats you see as you look ahead to the 2024 elections and what steps you are taking and you would share with other election officials to ensure that elections are secure and accessible, especially given these new challenges posed by AI.

BENSON: Yes. Thank you for having me. And it’s a critical discussion on a number of fronts.
I would say, you know, since the 2020 election, American voters and election administrators—American democracy writ large—have been through a lot. And we’ve also learned a lot as well about who our adversaries are, what their tactics are or will be, and what their goals are. And so we’re going to leverage a lot of that intelligence, that information, to prepare on all fronts to protect and secure our elections in 2024.

But there are three sort of emerging issues that are new on our table that we are particularly concerned about.
One, of course, is emerging technologies, which I know we’re going to talk about—artificial intelligence, the newness of it, the new frontier, and all the panoply of possibilities it creates for election malfeasance, interference, and confusion.
Second is the additional collapse, I guess you could say, of social media and other ways of getting information—or spreading misinformation—which has opened more doors than ever before for the spread of misinformation, particularly misinformation enabled by new and emerging technologies that, up to this point, are largely unregulated in terms of their usage.

And then the third piece—and this is, to me, the most important element of this—is that our adversaries—the adversaries to democracy, I would say—have more of an incentive than ever before—more than they did in 2016, 2020, and 2022—to actually interfere with our elections, because the outcome of the presidential election in America in 2024 will have a direct impact on wars against democracy overseas, particularly in Ukraine. And so Russia, Iran, and China have a greater incentive than ever before to try to influence our elections process.

So new, or perhaps more highly incentivized, adversaries; new and emerging technologies; and a collapse of trusted sources of information—i.e., social media in particular—have set us up to have a real challenge in 2024. But the good news is we’re ready. We’ve been through this in many ways in 2020, overseeing in Michigan the highest-turnout election—a successful, secure election—in the midst of a pandemic. So we have the tools, the resources, the brainpower, and the grit, frankly, to overcome these new and emerging issues that are going to plague our elections in ’24, but it’s not going to be easy. We have to be preparing and planning now with all partners in a whole-of-society approach—from academia to the federal government to local governments in every state, even candidates and voters—all of whom need to be empowered and prepared to ensure that the, for lack of better words, weapons of choice for our adversaries to interfere with our elections, be it technology, AI, or misinformation writ large, are all totally unsuccessful.

And so I’m happy to talk about what we’re doing on that front to ensure they’re not successful—I’m sure we’ll go into greater depth on that in our discussion—but I’ll just emphasize two things at the outset.
One, we know the goals of adversaries to democracy, and particularly American democracy, are three things: to create confusion—confuse voters, cause them to disengage; to create chaos—chaos that would, similarly, cause citizens to say I want to give up on democracy altogether, it’s not working; and to create fear—if I vote, if I participate, something bad might happen. So chaos, confusion, and fear are the goals.

So our response needs to be rooted in giving American citizens, no matter who they vote for, confidence, clarity, and certainty that democracy will work, that our elections will be secure, and that their vote will count. And we have to back that up with real action at the federal and state level on all fronts, recognizing that in everything we do we have to give voters confidence; we have to create clarity as to what happens when you vote, how to trust your vote, and all the rest; and we have to ensure certainty on the outcomes and on the procedures in voting as well.

The last thing I’ll mention on all those things is the messengers—developing stronger, trusted messengers of truthful information about our elections: how to vote, how to trust our results, how to respond to misinformation. That’s going to be crucial as we enter this election season. And that’s a role that everyone can play in various forms. I’m sure we’ll talk a little bit more about that, but proactively educating voters through trusted messengers is our best antidote to the misinformation that will flow into our communities in the months ahead, and I know you all are already doing that.

FASKIANOS: Thank you very much. That was great.

Marc, over to you to give your perspective.

ROTENBERG: Well, thank you very much, Irina. Nice to be with you, Secretary.

I think this is a great topic that we’re discussing today. I was listening to Senator Schumer earlier. He was describing the AI agenda in the U.S. Senate and the various bills that have been introduced, but he said the one issue that we need to prioritize now concerns AI and election security, because 2024 is such an important year—which is true, by the way, not only in the United States but also around the world: you have elections in Mexico, the EU, India, and elsewhere. So people are very focused on this issue.

I also want to say that my team and I at the Center for AI and Digital Policy have been working with several international organizations over the last few years to help develop frameworks for the governance of AI. And the phrase that you see reappear in many of these governance frameworks is trustworthy. People want to ensure that there are standards and norms established to ensure trustworthy AI. But with the emergence of generative AI—which is essentially an elaborate statistical technique for inferring outcomes based on large datasets, if I might put it that way—we’re also creating new tools that we don’t fully understand, even in the creation of voice, video, and text. Campaigns are now experimenting with these tools, but they’re not entirely sure what the consequences will be. And I think that should give us pause.
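
Rotenberg’s shorthand—generative AI as a statistical technique for inferring outcomes from large datasets—can be made concrete with a toy sketch. The Python fragment below is illustrative only: the vocabulary and probabilities are invented for this example, not drawn from any real model, but the mechanism is the same one large models use at vastly greater scale—each next word is sampled from a learned probability distribution.

```python
import random

# Toy next-word model. These probabilities are invented for illustration;
# real generative AI learns distributions like these from massive datasets.
NEXT_WORD_PROBS = {
    "the": {"election": 0.5, "ballot": 0.3, "voter": 0.2},
    "election": {"is": 0.6, "officials": 0.4},
    "is": {"secure": 0.7, "tomorrow": 0.3},
}

def generate(start: str, max_words: int = 5) -> str:
    """Sample a continuation one word at a time."""
    words = [start]
    for _ in range(max_words):
        dist = NEXT_WORD_PROBS.get(words[-1])
        if dist is None:  # no known continuation; stop
            break
        choices = list(dist)
        weights = [dist[w] for w in choices]
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g., "the election is secure"
```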

It actually reminds me a bit of the early days when people were talking about online voting. Online voting continues to raise, of course, a lot of concerns for American voters about security and reliability, and we need that careful technical assessment to determine how best to manage some of these techniques that are now being deployed.

I think the secretary makes an important point also: when we think about these new techniques and some of the vulnerabilities, it’s not simply in the context of trying to influence how someone may vote. As a general matter, we don’t, you know, oppose the use of radio, television, or the internet to get political views out. But it’s an entirely different matter when you’re dealing with foreign adversaries whose goal may simply be to disrupt an election, to reduce public trust, to create outcomes where democratic states are less confident in their own governments. And so I think we need to be looking at this challenge through that lens as well.

Now, we have sent statements and recommendations to the Senate Rules Committee, to Chairman Klobuchar and Ranking Member Fischer. We’ve made comments recently to the FEC regarding a proposed rule to extend some of the limitations on campaign advertising to include, for example, the deceptive use of generative AI techniques, because it is remarkably easy now with some of these tools to have your political opponent speak words that, in fact, she never said. And this can be done quickly, it can be done at scale, and it can even be personalized. I’ve been reading, for example, that one of the key techniques with chatbots is the ability to engage in a profile-based dialogue, where you know something about the individual, you engage with them online, and you continue a conversation with the aim of trying to persuade them as to an outcome.

Now, some, you know, experts in the campaign field will say, well, of course, over many years campaigns have learned to target certain groups based on certain interests, so of course that’s a familiar strategy. But what I don’t think people fully appreciate is that that’s essentially a one-time communication, whether it arrives as an email, a text message, or a pamphlet under the door. What ChatGPT makes possible is ongoing engagement with the voter, with the aim of trying to persuade, and doing so in a way that we’ve never seen before in elections. So I think state election officials are going to have a lot of challenges ahead as they try to assess these new techniques.
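
The structural difference Rotenberg draws—a one-time mailer versus ongoing engagement—comes down to state. Here is a minimal sketch, assuming the OpenAI Python client and a placeholder model name (neither is specified in the discussion): the chatbot carries the entire conversation history forward on every turn, which is what makes sustained, personalized dialogue possible.

```python
# Minimal sketch of one-time vs. ongoing communication. The OpenAI client
# and the model name are assumptions for illustration; any chat-completion
# API has the same basic shape.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A pamphlet or email blast is a single, stateless message.
pamphlet = "Vote for candidate X on November 5."

# A chatbot keeps the whole exchange as state and responds in context.
history = [{
    "role": "system",
    "content": "You are a campaign outreach assistant. Disclose in every "
               "reply that you are an AI.",  # the transparency norm discussed
}]

def reply(voter_message: str) -> str:
    history.append({"role": "user", "content": voter_message})
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name
        messages=history,     # the full, growing conversation
    )
    answer = resp.choices[0].message.content
    history.append({"role": "assistant", "content": answer})  # state persists
    return answer
```

Each call sends the accumulated history, so the system can pick up where it left off with a particular voter—exactly the capability that a one-time email, text, or pamphlet never had.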

One of the themes that we see in many of the governance frameworks for artificial intelligence is a strong emphasis on transparency. People should know when they’re interacting with an AI tool. They should know the source of the message so that they can verify it if need be. And I would even, you know, offer a bit of a warning to the campaigns that are experimenting with these techniques: I would encourage you to really make sure you fully assess the outputs that the techniques are generating, because we have seen in the last several months, of course, that many of the leading experts in computer science working closely with generative AI techniques have actually been surprised—surprised by results that weren’t predicted and that are, in some instances, a bit troubling.

I will say also that we raised some of these issues earlier this year in a complaint that our organization filed with the U.S. Federal Trade Commission regarding OpenAI’s product ChatGPT. We pointed to some of the risks regarding the use of ChatGPT in elections. I’ve filed similar complaints with the FTC in the past, but what was striking about this particular complaint is that OpenAI itself, in its technical self-assessment known as the system card, actually describes the risks—the risks of using ChatGPT for influence operations: not only the profiling and targeting, but the ability to use the technique to disseminate misinformation.

And so I think we’re in a moment, as I said, where we’re going to face new challenges. I think the work of election officials is about to enter a new phase. And the phrase I hear oftentimes from the experts in the field is that what we’re seeing is a rapid acceleration at scale, so the conversation we are having today may not even be the conversation we’re going to be having in spring of 2024 as we get closer to the elections. I would recommend, as I said earlier, more transparency, more accountability, notification as appropriate, and also of course training election officials to put in place the necessary safeguards and to identify the risks that could arise from the use of AI techniques for misinformation and disinformation.

Trust is absolutely central to the democratic process, and we need the ability to ensure that the outcomes produced through our elections are outcomes the public accepts and respects. It’s not at all clear at this moment that generative AI is going to take us closer to that goal, and I think that is the challenge.

FASKIANOS: Thank you so much.

So we’re going to go to all of you now for questions and comments. So you can raise your hand, you can write your question in the box, and I will read it. Secretary Benson has to depart in ten minutes or so, so let’s prioritize raised hands for questions for her and then we’ll go to Professor Rotenberg as well.

But while we’re waiting for questions to queue up, Secretary Benson, you said you would talk in the discussion about the steps you’re taking to protect elections. Maybe you could share some of the things you’re putting in place, especially given what Professor Rotenberg has said.

BENSON: Yes. I think, of the protections—and we need legislative changes in order to do a lot of them—there are a few that are legislative and one that is administrative.

On the legislative front, you know, we recognize that the federal government is proposing legislation to protect elections from deceptive AI. We’re also looking at what states like Minnesota and others have done to develop two bills, or two policies, in Michigan. One would require instant disclosure or disclaimers anytime any type of political communication involves AI, period, whether it’s used deceptively or not. So disclosures and disclaimers is one. The second is criminal penalties for when AI is utilized in an effort to deceive—particularly to intentionally deceive voters about an election policy, or how to vote, or when to vote, or whether an election is canceled, or other election-related information. So: providing criminal penalties for individuals who intentionally use AI to deceive voters about their rights and the actions they can take to vote, and also getting legislation that would require disclaimers on any use of AI in any political communication, even just issue discussions.

So those two pieces of legislation have already been introduced. They’re working their way through the legislature. And other states are following suit. Minnesota has also just passed or was considering some legislation. And of course, the Rules Committee has also had a hearing in the U.S. Senate on similar legislation.

The third thing we need to do is prepare voters to know that this is coming. While we typically educate voters about the whens, the hows, and the whats of actually casting your ballot—where to get your ballot, where to return your ballot, what early voting is, how to register to vote—we now have to add a component to all of our voter-education pieces, through the myriad of voices from faith leaders to community leaders to sports leaders to educational leaders, about AI and its potential harmful effects on democracy. We have to empower and equip voters with the information they need ahead of time, so that when AI does land—and we have to anticipate it will—they’re equipped and empowered to know what it is, to not trust it, in a way, and to be critical consumers of information in this election cycle.

So I think those three things—along with working with our partners in the federal government, partnering with CISA and the other agencies on any foreign interference and the consequences there—will continue to keep us at the forefront.

And the last thing I’ll emphasize on that is that, you know, federal law and the laws coming out of Congress would apply specifically to federal candidates, not to state candidates, not to local candidates. And so just because the federal government is acting or does act on this front, every state should still be looking at it. It doesn’t, you know, absolve the states—or even local governments—of the role we need to play in enacting disclosure/disclaimer regulations, as well as criminal penalties for those who would seek to deceive voters about their rights.

FASKIANOS: OK. Thank you.

I’m going to go take the first question from Councilmember John Jaszewski, who has raised his hand, if you can accept the unmute prompt.

Q: Is that it? Did I get it?

FASKIANOS: Yes.

Q: OK.

My question is simple. You know, the bad actors speak the loudest. They have the loudest megaphones and therefore seem to dominate sometimes the discussion. Are there any specific things that local officials like myself, small communities, can do to, you know, overpower that bad actor, that loud voice from the bad actors?

BENSON: Yes, there are. In fact, I would argue you have perhaps one of the most critical roles to play, because as a local official you are closer to the ground, there in real time, and a trusted voice in your community. So, one, I would say preemptively or proactively prepare your constituents to know this is coming and to know what they can do to report it, and provide a way that they can report it. We have in Michigan a portal to report any type of misinformation, including AI-driven misinformation—an email address and a portal on our website that people can use to report it—so that we know about it and we can respond to it and debunk it as well.

But you are a connector to the citizens who are targeted by deceptive AI. And you can use that connection and your trusted voice to help them be prepared now for what’s coming. Have a town hall talking about this. Invite experts to talk about what AI is going to mean for every voter, how to intercept AI on social media, and how to push for the state and federal changes we need to require disclaimers, disclosures, and criminal penalties. And then, you know, be there throughout the cycle to identify AI, report it, call it out, and equip voters to do the same, so that we are proactively ready when those hits come. Through education, empowerment, and clear to-dos of what to do when it hits, everyone can be part of intercepting AI before it can have its intended impact of deceiving voters and causing chaos and confusion around elections.

FASKIANOS: Thank you. I’m going to take the next question from David Burnett.

Q: Thank you. We were talking about deceptive practices with AI. We’ve already seen an example where one candidate for president used an AI-generated voice to verbalize a statement of his opponent—a statement the opponent had made, but in written form only. So there’s a bit of a nuance as to whether it’s actually deceptive to use, in audio ads, something that the candidate did say but not verbally.

BENSON: I think that speaks to the need to have a disclaimer and disclosure for the use of any AI, so that people know that artificial intelligence has been used to make the commercial. That is our first step in equipping voters: not getting involved in the intent-or-deception piece of when AI is used, but giving voters that basic information. And certainly, the criminal penalties involved in intentionally using it can come later, as the process plays out through, you know, accusations of intent and all the rest. But I would argue the example you gave should have a disclaimer on it, so that voters know, at the very least, that this was AI-generated audio.

FASKIANOS: Thank you.

Next, I’m going to take a written question from County Commissioner Nikki Koons: Can either speaker go more in-depth on how AI would be able to be used in an election process? I live in a very rural area and am not hooked up to the internet during voting. Are we just talking about AI generating any type of information prior to the election?

BENSON: I think I mean—go ahead.

ROTENBERG: Well, I was just going to say that even if you don’t have internet-based voting, you likely do have internet-connected voters who are receiving advertising and communications online. And, of course, campaign ads on internet websites are one of the most popular ways today in the United States to reach voters. I think you should anticipate that those online campaign ads will reflect some of these new generative AI techniques. And, again, that will maximize the opportunity for misinformation that’s also highly targeted, because you take the profile of voters that you know and you’re able to extend that engagement over a period of time, which was not something that was possible in the past.

FASKIANOS: Secretary Benson?

BENSON: Yes. Yeah, so we’re certainly anticipating the use of AI to be part of what interferes with election processes as well. This could include the creation and spread of localized misinformation on election day—putting out information making falsified claims about conditions at the polls, even perhaps falsely suggesting violence in certain precincts as a way of deterring people from showing up to vote. We know that there are a handful of states—and, you could argue, in Wisconsin and others, even just a handful of precincts—that could directly influence the outcome of the presidential election.
There’s a way to potentially target different areas with that misinformation to drive voters away, and to dissuade voters from showing up to vote at all, with misinformation about wait times, closures of polling places, and other things. You should see it as a potential voter-suppression tactic that can be easily deployed on election day. That is where it’s most likely to interfere with the operations of elections itself, and why we need to, in some cases, have an operation in place to rapidly respond to information that gets out there.

And we already do that in Michigan in general, because misinformation can be generated by humans on social media. So we already have a rapid-response network in place to identify, source, and respond to misinformation about polls closing or violence at the polls, and we have to expand that. But we certainly have to anticipate that the use of AI—even if it’s not used to actually interfere with the hardware of elections, and that has not really been the tactic of foreign adversaries anyway—could interfere with the people who make elections come to fruition, both voters and election administrators. That’s going to be their most likely target. And it’s going to be a target that is about creating that chaos, confusion, and fear among voters, and even among election workers, who may fear threats that could interfere with their ability to oversee a presidential election.

FASKIANOS: Thank you. I’m going to go next to Lanette Frazier.

Q: Thank you so much.

I would like to know, as a city councilperson, whether there is already some kind of verbiage out there—language we can use to create a disclosure/disclaimer scenario, and also some kind of legislation to criminalize deceptive AI—that we can use instead of starting from scratch?

BENSON: Yes, I’m happy to share with you the information on the bills we’ve introduced in Michigan. There’s federal legislation as well that has language we borrowed from. And Minnesota and a few other states have already proposed or enacted legislation. So there are sample bills out there, and I’m happy to get you what we’re doing in Michigan through CFR. I hope, you know, organizations like NCSL and others that typically compile these model laws and policies will also include this in their portfolio. But in the meantime, we’re happy to get you the language that we’re using in Michigan.

FASKIANOS: Fantastic. And we can share that with the whole group, as well as a contact at NCSL with whom we work.
I’m going to go next to Christian Amez.

Q: All right, so just kind of piggybacking on that previous question but a little bit more detail. You know, what kind of, you know, criminal penalties are—you know, should be included in state legislation to kind of prevent people from using generative AI, deep fakes, et cetera during these elections? Because, you know, one thing that I’m worried about is that the sort of penalties are so low or just become the cost of doing business when it comes to running elections and campaigns. You know, how do we—what’s the bar that we should set to make sure that it becomes prohibitive to use this sort of technology in a misleading way?

BENSON: I think if you have identified—if anyone has identified—that bar, let us know. I mean, we’re in a new frontier here. Certainly, this is a new, emerging technology that is advancing very quickly. As was said earlier, even what we know is possible now could change in six months. And the collection of ways in which bad actors can take advantage of this new and emerging technology—particularly at a time when they have such a high incentive to do so—is in many ways limitless. So we know that disclaimers on AI-generated deepfakes and other content that could mislead voters are one of the ways we can equip voters with the information they’re going to need to know what to trust and what not to trust. And then, certainly, penalties for intent to deceive are going to have to evolve as the attempts to deceive evolve as well.

So I think the most important thing in this moment is that we prioritize collectively solving this problem, which we’re starting to do. And I think Senator Schumer’s comments in particular are really well taken and appreciated, and it seems to be a bipartisan prioritization. And then we need to create, as we’re doing in Michigan, expert-driven workgroups to help us track the evolving and emerging threats as the technology evolves, knowing that, again, the use of the technology is likely going to be very different one year from now, when the election is happening, than it is even right now. So we have to build up an infrastructure to adjust and adapt, and to evolve and develop new solutions as we go along.

FASKIANOS: Fantastic. And just to say, Lanette was from Arkansas and our last question came from New York.
I’m going to go next to a written question from Mike Zoril, who’s a supervisor in Rock County, Wisconsin: Can AI algorithms be developed to both detect voter fraud in real-time by analyzing patterns in voting data, and also allow citizens to verify that their vote was counted accurately? Marc, I don’t know if you can speak to that. 

ROTENBERG: Right. So that is actually a very interesting field right now, particularly with regard to generative AI and establishing techniques for watermarking. There are now, I think, fifteen companies that have pledged at the White House to incorporate watermarking in the creation of generative AI output, which includes text and audio—I’m not quite sure how it’s going to work with audio—but also video. And that can reduce the likelihood of using generative AI without detection. But, of course, there will always be countermeasures, and people will try to evade detection and try to evade some of the compliance requirements.
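
The transcript doesn’t specify how text watermarking works. One published approach—the statistical “green list” scheme of Kirchenbauer et al. (2023), an assumption here rather than anything Rotenberg names—has the generator bias its sampling toward a pseudorandom half of the vocabulary derived from each preceding token; a detector then recomputes the partition and checks whether suspiciously many words landed on the green side. A minimal detection sketch:

```python
# Hedged sketch of statistical watermark *detection*, loosely following the
# "green list" idea of Kirchenbauer et al. (2023). Illustrative only; the
# transcript does not name a specific watermarking technique.
import hashlib
import math

def is_green(prev_word: str, word: str) -> bool:
    """Deterministically assign roughly half of all (prev, next) pairs to a
    'green list'; a watermarking generator would have favored these pairs."""
    digest = hashlib.sha256(f"{prev_word}|{word}".encode()).digest()
    return digest[0] % 2 == 0

def watermark_z_score(words: list[str]) -> float:
    """Compare the green-word count to the 50% expected by chance. Strongly
    watermarked text scores high; ordinary human text hovers near zero."""
    n = len(words) - 1
    hits = sum(is_green(a, b) for a, b in zip(words, words[1:]))
    return (hits - 0.5 * n) / math.sqrt(0.25 * n)

sample = "officials should verify every claim before sharing it".split()
print(watermark_z_score(sample))  # a z-score above ~4 would suggest a watermark
```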

There is a phrase I’d like to share with everybody that we have used repeatedly to try to help people understand the unique challenge with generative AI, in the election context and other contexts as well. We say that generative AI can both mimic and manipulate human behavior. Which is different, I would say, from how we’ve oftentimes thought of computer systems and simpler issues, let’s say, of security or accurate vote tabulation. Because with generative AI, you’re now producing a text, a voice message, that sounds familiar and sounds convincing. And once that conversation and point of contact is created, then there’s the opportunity for persuasion. And that’s what I think election officials need to be on the lookout for.

FASKIANOS: Thank you.

OK, so with that I’m going to—we’re going to keep going with Professor Rotenberg, but Secretary Benson does have to go. She’s on her way to an appointment. So I want to thank you, Secretary Benson, for your time with us today, for all that you have done in defending democracy for the country. And we will circulate the resources that you mentioned to the group. So, again, thank you for being with us today. And we really appreciate you wedging this into your very busy schedule. 

BENSON: Thank you, all. Thank you. It’s been wonderful. Thank you for this discussion. It’s so important. And I look forward to more discussions in the future. Thank you.

FASKIANOS: Thank you.

All right, Marc, we are going to continue on with you. And I’m going to next go to—let me see—John Bouvier. And if you could identify yourself, that would be great.

Q: Yes, thank you. My name is John Bouvier. I’m a councilman with the town of Southampton.

And my question—I’m an engineer, so forgive me for thinking in the other direction. I see that AI potentially has a use in voter verification, and particularly in the technology of collecting votes. And it seems to be at the discretion of a lot of different boards of elections across the country who they hire and what equipment they use. I’m wondering how we protect against that, in the sense that AI could be a useful tool in that respect, but how do you guard against its misuse, particularly when it’s being used in equipment and computer systems that are identifying voters—signature recognition, all that kind of thing? And it’s my push to standardize—to make a standard on how that’s done—because it seems to be at such local discretion that the voter is sort of left in a state of mistrust as a result of things like AI as well.

ROTENBERG: John, it’s a great question. I think it actually takes us back to some of the foundational issues around election integrity and the importance, for example, of using paper ballots that can be properly tabulated, or retabulated if necessary. Many of the concerns with digital voting, I suspect, will be amplified with AI, because there’s more opportunity for manipulation and for mischief. So there is a lot to be said for voter verification techniques that rely on, you know, paper documents. Not to discourage some of the more innovative approaches I know states are taking, but to have that as a backup and a reliable source of voter identification I think is a good foundation. And I think it’s also key to how we think about the vote tabulation process itself.

FASKIANOS: Great.

Let’s go next to—I’m going to go to City Attorney Carrie Daggett’s question for the city of Fort Collins in Colorado: What kind of practical suggestions do you have for detecting, proving, and enforcing the requirements for disclosure and prohibiting the use of AI?

ROTENBERG: Yeah, that’s not a simple question you just asked—and I’m saying this only half-jokingly, because many of the leading computer scientists have actually said that we need a six-month pause on this technology precisely because they can’t fully audit and assess the outcomes that are being produced. So maybe the secretary has some further information on this, or NCSL, which is a very good resource, can provide some technical support. I do think it will be helpful for state and local election officials to establish your online presence as a trusted source of information, so that as these questions arise, if you need to flag, for example, particular campaign communications, there’s a way to convey that information to the public from a familiar and trusted source. Because, of course, part of manipulation is also the effort that people sometimes make to present themselves as trusted sources.

FASKIANOS: OK.

I’m going to go next to—we’re coming to the end of our time, and so many questions. But I’m going to go next to Tim Duey, who works in the office of Senator Kathleen Kauth in Nebraska. Let’s see: I’m glad we’re discussing the dangers of AI abuse in elections, because it does seem like a big problem. I did want to ask though, couldn’t this be a game-changer for candidates in local down-ballot races, where there’s one person who needs to engage with many thousands of voters, if AI isn’t used deceptively or in an unscrupulous manner?

ROTENBERG: I’m sorry. I didn’t hear the question.

FASKIANOS: Could it be used productively for a candidate in a local down-ballot who might not have the resources to, you know, get the word out about their campaign? Could AI take the place of volunteers to communicate their platform to voters that they would have never been able to reach otherwise?

ROTENBERG: Yes. So, you know, it’s a good exercise, actually, to go online and try ChatGPT and say: Draft talking points for me as a candidate; I intend to emphasize these three issues, and here are the constituencies I’m trying to persuade—and see the output. I think you’ll be surprised and impressed. As a first draft, the conversational AI models actually produce very good text. But over time, you know, the outputs need to be interrogated, and the risk of misinformation needs to be addressed.
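
Rotenberg’s exercise can also be run programmatically. Here is a sketch assuming the OpenAI Python client; the model name and the sample issues are placeholders, not anything from the discussion:

```python
# Sketch of the talking-points exercise via an API call rather than the
# ChatGPT website. Model name and issue list are illustrative placeholders.
from openai import OpenAI

client = OpenAI()
prompt = (
    "Draft talking points for me as a candidate for town council. "
    "I intend to emphasize three issues: road repair, library funding, "
    "and transparent budgeting. The constituencies I am trying to "
    "persuade are young families and retired homeowners."
)
resp = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name
    messages=[{"role": "user", "content": prompt}],
)
print(resp.choices[0].message.content)  # a first draft; verify before use
```

As Rotenberg cautions, the output is a first draft to be interrogated, not copy ready to publish.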

FASKIANOS: Thank you.
I’m going to take, I think, probably the final question from Evan Collier, a raised hand. Go ahead, you’re unmuted now. I think we’ll be able to hear you, hopefully. OK, we seem to be having technical difficulties.
Maybe we can go next to Commissioner Kevin Boozel from Pennsylvania.

Q: Can you hear me?

FASKIANOS: Yes.

Q: OK, perfect. You know, this has been a wonderful session, I got to say. My name is Kevin Boozel. I’m Butler County Commissioner in Pennsylvania. And I also sit as chairman of our County Commissioners Association.
Elections have been extremely, extremely hostile in the last couple of years, and this is not going to help us. I appreciate the fact that the policies are coming from the federal level; I would like to have those sent to us so that we can advocate for them for all of the counties. And, you know, this AI is beyond probably 90 percent of our brains—this is not something we deal with every day—but I think the literature, educating the public on what to look for, matters. You know, I’m on social media a lot; that’s about as far as I go. A lot of the misinformation is on there, and then it’s removed. But it says: removed because of inaccurate information, or something, on the social media site.

I’m assuming that that’s because of bad actors; I don’t know that for sure. Some people get what’s called Facebook jail—they’re not allowed to post anything for a while. And this is all interactive. So how do you manage all these social media sites with one system, I guess, is my question? And how do we hold people accountable? Because they created such a stir in our elections office in 2020—they over-sent applications to people, potentially from third parties. You know, it’s so hard to get a hold of this stuff. And I think AI is generating a lot of this mail in some fashion—I could be wrong, I don’t know; maybe it’s just bad actors doing it by hand. But I’m just looking for policy, and for information that I can hand to people who are questioning the accuracy, again, and what they should do about it. How do they prove it, or how can they report it?

ROTENBERG: Well, Commissioner, I have good news and I have bad news. The good news is I see that Andrew Morgan just posted in the chat the new report from NCSL on AI regulation in the states, so I’m sure that’s going to be a useful resource. The bad news is that, you know, all of the major companies are dealing now with the concerns around misinformation being amplified through generative AI. And the focus is, of course, the electoral process, because that will be the target for so many deployments over the next year. So you are literally on the front lines. I do think some of the challenges are going to be new and different, and you’ll need to communicate rapidly with your colleagues as these issues emerge. That’s certainly what we’ve experienced over the past year working on AI policy with the computer scientists—and they continue to be astounded. So I imagine there are a few surprises ahead.

FASKIANOS: Marc, are there any closing thoughts you want to leave us with?

ROTENBERG: Well, as I said at the outset, we’ve done a lot of work with international organizations on AI policy frameworks, and I don’t think there’s any doubt that ensuring trustworthy AI systems is the essential goal. The governance frameworks, the regulatory standards, the technical standards all need to be pursued with that as the central mission. And, you know, it’s very encouraging, listening to the secretary and hearing these questions, to see the work that’s already underway. I just, you know, wish you the best, because I know there will be some challenges ahead.

FASKIANOS: Fantastic. So thank you, Professor Marc Rotenberg and Secretary Benson, who had to leave.
Again, we will share this recording, as well as the transcript and some of the resources that were mentioned, with all of you. You can follow Secretary Jocelyn Benson on X at @JocelynBenson, and Professor Marc Rotenberg at @MarcRotenberg. And, as always, we encourage you to visit CFR.org, ForeignAffairs.com, and ThinkGlobalHealth.org for the latest developments and analysis on international trends and how they’re affecting the United States. We welcome your suggestions, thoughts, and feedback—you can email [email protected]. We always love hearing from you.

And thanks for all the important work that you are doing in your communities. This is—it really takes everybody putting their hand in this to make sure that we safeguard our democracy. So thank you again to everybody.

ROTENBERG: Thank you.

END
